Kappa coefficient: a popular measure of rater agreement
Abstract
In mental health and psychosocial studies, it is often necessary to report the between-rater agreement of the measures used. This paper discusses the concept of agreement, highlighting its fundamental difference from correlation. Several examples demonstrate how to compute the kappa coefficient, a popular statistic for measuring agreement, both by hand and with statistical software packages such as SAS and SPSS. Real study data illustrate how to use and interpret this coefficient in clinical research and practice. The article concludes with a discussion of the coefficient's limitations.
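As a companion to the by-hand examples the abstract describes, the sketch below applies the standard definition kappa = (p_o - p_e) / (1 - p_e) to a two-rater contingency table. It is not taken from the paper; the function name and the 2x2 table are invented for illustration.

```python
# Sketch of Cohen's kappa for two raters rating the same subjects
# (illustrative only; the table and function name are not from the paper).
# kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of
# agreement and p_e is the agreement expected by chance from the marginals.

def cohens_kappa(table):
    """table[i][j] = number of subjects put in category i by rater 1 and j by rater 2."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_o = sum(table[i][i] for i in range(k)) / n                      # observed agreement
    row_marg = [sum(table[i][j] for j in range(k)) / n for i in range(k)]
    col_marg = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    p_e = sum(row_marg[i] * col_marg[i] for i in range(k))            # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Two clinicians classifying 100 patients as case / non-case (invented numbers):
table = [[40, 10],
         [15, 35]]
print(cohens_kappa(table))  # observed agreement 0.75, kappa = 0.50
```

The same statistic is reported by SAS (PROC FREQ with the AGREE option) and SPSS (CROSSTABS with the KAPPA statistic), which are presumably the software routes the paper's examples walk through.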
Similar articles
Assessing the reliability of radiologists and their performance in diagnosing the severity of ovarian masses on ultrasound
Background: Intra-rater agreement in observation and decision making is of great importance in the diagnosis of any disease. In this investigation, radiologists read ultrasound images of ovarian cysts and assigned each mass to a category. Distinguishability is a related issue here, and radiologists' ability to reach a correct diagnosis is of great concern. In this study...
Test-Retest and Inter-Rater Reliability Study of the Schedule for Oral-Motor Assessment in Persian Children
Objectives: Reliable and valid clinical tools to screen, diagnose, and describe eating functions and dysphagia in children are highly warranted. Today, most specialists are aware of the role of assessment scales in the treatment of affected individuals. However, the clinical tools in use may be nonstandard, and worldwide there is no integrated assessment performed to assess ...
Agreement Between an Isolated Rater and a Group of Raters
The agreement between two raters judging items on a categorical scale is traditionally measured by Cohen's kappa coefficient. We introduce a new coefficient for quantifying the degree of agreement between an isolated rater and a group of raters on a nominal or ordinal scale. The coefficient, which is defined on a population-based model, requires a specific definition of the co...
Beyond kappa: A review of interrater agreement measures
In 1960, Cohen introduced the kappa coefficient to measure chance-corrected nominal scale agreement between two raters. Since then, numerous extensions and generalizations of this interrater agreement measure have been proposed in the literature. This paper reviews and critiques various approaches to the study of interrater agreement, for which the relevant data comprise either nominal or ordin...
A comparison of Cohen’s Kappa and Gwet’s AC1 when calculating inter-rater reliability coefficients: a study conducted with personality disorder samples
BACKGROUND: Rater agreement is important in clinical research, and Cohen's Kappa is a widely used method for assessing inter-rater reliability; however, there are well-documented statistical problems associated with the measure. In order to assess its utility, we evaluated it against Gwet's AC1 and compared the results. METHODS: This study was carried out across 67 patients (56% males) aged 18 ...
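To make the well-documented statistical problems mentioned above concrete, here is a small numerical sketch (invented counts, not data from this study) contrasting Cohen's kappa with Gwet's AC1 on a skewed 2x2 table. The AC1 chance term uses the standard two-category form 2*pi*(1 - pi), where pi is the mean marginal proportion of the positive category; the function name and numbers are hypothetical.

```python
# Hypothetical illustration (not from the study): Cohen's kappa vs Gwet's AC1
# on a skewed 2x2 table. AC1 replaces kappa's chance term with
# p_e = 2 * pi * (1 - pi), where pi is the mean marginal proportion of
# the positive category (standard two-category form of Gwet's AC1).

def agreement_stats(a, b, c, d):
    """2x2 table: a = both raters positive, b/c = disagreements, d = both negative."""
    n = a + b + c + d
    p_o = (a + d) / n                              # observed agreement
    p1_r1, p1_r2 = (a + b) / n, (a + c) / n        # 'positive' marginal for each rater
    kappa_pe = p1_r1 * p1_r2 + (1 - p1_r1) * (1 - p1_r2)
    kappa = (p_o - kappa_pe) / (1 - kappa_pe)
    pi = (p1_r1 + p1_r2) / 2
    ac1_pe = 2 * pi * (1 - pi)
    ac1 = (p_o - ac1_pe) / (1 - ac1_pe)
    return p_o, kappa, ac1

# 100 patients, a rare diagnosis: the raters agree on 90 of them.
print(agreement_stats(a=2, b=5, c=5, d=88))
```

With these invented counts the observed agreement is 0.90, yet kappa comes out at about 0.23 while AC1 is about 0.89, the pattern usually cited as the kappa paradox under skewed prevalence.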
Journal:
Volume 27, Issue
Pages: -
Publication date: 2015